1,050 research outputs found

    Sparse Neural Network Training with In-Time Over-Parameterization

    A NDT&E Methodology Based on Magnetic Representation for Surface Topography of Ferromagnetic Materials

    Accurate evaluation is the ultimate aim of nondestructive testing (NDT). However, present electromagnetic NDT methods are commonly used only to check for the existence of defects, and the tested targets consist solely of concave (i.e., section-loss) defects such as holes, cracks, and corrosion; these methods fail to evaluate the tested surface topography, which comprises both concave-shaped and bump-shaped features. It is currently accepted that the observed defect signals manifest as single- or double-peak waves, and that the up/down direction of a peak can be flipped, even for the same defect, simply by reversing the direction of either the applied magnetization or the pick-up unit. Unlike the stylus and optical methods used for surface topography inspection, a new electromagnetic NDT and evaluation (NDT&E) methodology is proposed based on an accurate magnetic representation of surface topography, in which a concave-shaped feature produces "positive" magnetic flux leakage (MFL) and therefore a "raised" signal wave, whereas a bump-shaped feature generates a "negative" magnetic field and therefore a "sunken" signal wave. On this basis, the correspondence between wave features and surface topography is established, and an evaluation system for testing surface topography (concave, bumped, and flat features) is built. The proposed methodology is analyzed and verified by finite element and experimental methods, and the dimensional parameters of height/depth and width of the surface topography are further studied.
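    The sign convention described in this abstract can be illustrated with a minimal sketch, assuming a 1-D MFL scan stored as a NumPy array; the function name classify_topography and the flat_band noise threshold are hypothetical illustrations, not part of the paper:

        import numpy as np

        def classify_topography(mfl_signal, flat_band=0.05):
            # Label each sample of a 1-D MFL scan using the paper's sign
            # convention: a "raised" (positive) wave marks a concave feature,
            # a "sunken" (negative) wave marks a bump, and values inside the
            # flat band mark a flat surface. `flat_band` is a hypothetical
            # noise threshold relative to the peak amplitude.
            s = np.asarray(mfl_signal, dtype=float)
            peak = np.max(np.abs(s))
            if peak > 0:
                s = s / peak  # normalize to [-1, 1]
            labels = np.full(s.shape, "flat", dtype=object)
            labels[s > flat_band] = "concave"  # positive MFL -> raised wave
            labels[s < -flat_band] = "bump"    # negative field -> sunken wave
            return labels

        # Example: a raised wave followed by a sunken wave
        scan = [0.0, 0.4, 0.9, 0.3, 0.0, -0.3, -0.8, -0.2, 0.0]
        print(classify_topography(scan))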

    A New Method of SHM for Steel Wire Rope and its Apparatus

    In practical engineering, steel wire ropes often operate in a high-speed swinging state, and their reliable structural health monitoring (SHM) directly concerns human lives. However, they usually exceed the capability of present portable magnet-based magnetic flux leakage (MFL) sensors built on the magnetic yoke method, owing to that method's strong magnetic force and large weight. Unlike the yoke method, a new SHM method for steel wire rope is proposed through theoretical analysis and verified by the finite element method (FEM) and experiments; it features a much weaker magnetic interaction force while offering magnetization capability similar to the traditional yoke method. The corresponding detection apparatus (sensor) is designed through simulation-based optimization. Experimental comparisons between the new sensor and yoke sensors for steel wire rope inspection confirm the smaller magnetic interaction force and the reduced wear and damage relative to traditional technologies. Finally, the SHM method for steel wire rope and its apparatus are discussed, demonstrating good practicability under poor working conditions.

    The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter

    Large pre-trained transformers are the show-stealers of modern-day deep learning, and it becomes crucial to comprehend the parsimonious patterns that exist within them as they grow in scale. With exploding parameter counts, the Lottery Ticket Hypothesis (LTH) and its variants have lost their pragmatism in sparsifying these models because of the high computation and memory cost of the repetitive train-prune-retrain routine of iterative magnitude pruning (IMP), which worsens with increasing model size. In this paper, we comprehensively study induced sparse patterns across multiple large pre-trained vision and language transformers. We propose the existence of essential sparsity, defined by a sharp dropping point in the sparsity-performance curve beyond which performance declines much faster with rising sparsity when the smallest-magnitude weights are removed directly in one shot. We also present an intriguing emergent phenomenon of abrupt sparsification during the pre-training of BERT: after a certain number of iterations, BERT suddenly becomes heavily sparse. Moreover, our observations indicate the counter-intuitive finding that BERT trained on a larger amount of pre-training data tends to condense knowledge into comparatively fewer parameters. Lastly, we investigate the effect of the pre-training objective on essential sparsity and find that self-supervised learning (SSL) objectives trigger stronger emergent sparsification properties than supervised learning (SL). Our code is available at https://github.com/VITA-Group/essential_sparsity
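    The one-shot removal the abstract refers to can be made concrete with a short sketch. This is not the paper's released code (see the repository linked above); it is a minimal, assumed PyTorch implementation of global one-shot magnitude pruning, where model stands for any standard torch.nn.Module:

        import torch

        @torch.no_grad()
        def one_shot_magnitude_prune(model, sparsity):
            # Zero out the `sparsity` fraction of smallest-magnitude weights,
            # globally across all weight tensors, in a single shot (no retraining).
            # Biases and norm parameters (dim <= 1) are left intact.
            prunable = [p for p in model.parameters() if p.dim() > 1]
            magnitudes = torch.cat([p.abs().flatten() for p in prunable])
            k = int(sparsity * magnitudes.numel())
            if k == 0:
                return
            threshold = torch.kthvalue(magnitudes, k).values
            for p in prunable:
                p.mul_((p.abs() > threshold).to(p.dtype))

        # Hypothetical sweep to expose the "essential sparsity" dropping point:
        # prune a fresh copy at each level, evaluate, and look for the knee in
        # the sparsity-performance curve.
        # for s in (0.2, 0.4, 0.6, 0.8, 0.9):
        #     pruned = copy.deepcopy(model)
        #     one_shot_magnitude_prune(pruned, s)
        #     accuracy = evaluate(pruned)  # `evaluate` is task-specific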